[Kernel] Add ModelOpt FP4 Checkpoint Support #12520
Conversation
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
This pull request has merge conflicts that must be resolved before it can be merged.
Signed-off-by: Pavani Majety <pmajety@nvidia.com>
LGTM, just a few comments, thanks!
def get_quant_method(self, layer: torch.nn.Module,
                     prefix: str) -> Optional["QuantizeMethodBase"]:
    from vllm.attention.layer import Attention  # Avoid circular import
    if isinstance(layer, LinearBase):
        return ModelOptNvFp4LinearMethod(self)
    elif isinstance(layer, Attention):
        return ModelOptFp8KVCacheMethod(self)
    return None
It looks like we are missing the check between prefix and exclude_modules to ignore layers, as noted in the model config https://huggingface.co/nvidia/DeepSeek-R1-FP4/blob/761db7ea1b0b750e29fa78ebd2ce449e619809c5/hf_quant_config.json#L10-L15
DeepSeek-R1-FP4 doesn't work with this PR (we are missing support for FP4 group GEMM). I added back exclude_modules and now load the linear layer classes accordingly. The actual correctness will be checked after adding support for the R1-FP4 model in a future PR.
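For illustration, a minimal sketch of what such an exclusion check could look like; the helper name is_layer_excluded is hypothetical, and the actual check added in this PR may differ:

from typing import List

def is_layer_excluded(prefix: str, exclude_modules: List[str]) -> bool:
    # Layers listed under "exclude" in hf_quant_config.json
    # (e.g. "lm_head") should keep their original precision.
    return any(module in prefix for module in exclude_modules)

# Hypothetical use inside get_quant_method, before picking a method:
# if is_layer_excluded(prefix, self.exclude_modules):
#     return UnquantizedLinearMethod()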
# for input only the contracting dimension has a constraint.
x_m, _ = x.shape
w_n, _ = layer.weight.shape
output_shape = [x_m, w_n]
What is the constraint? There seems to be none.
Removing this comment since we explicitly check the correctness of x's shape in scaled_fp4_quant. The constraint is that the innermost dimension must be divisible by block_size.
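As a rough sketch of the constraint being described (assuming the NVFP4 block size of 16; the actual validation lives in scaled_fp4_quant and may be stricter):

import torch

BLOCK_SIZE = 16  # NVFP4 groups 16 elements per scale factor

def check_contracting_dim(x: torch.Tensor) -> None:
    # The innermost (contracting) dimension must be divisible by the
    # block size so every quantization group is fully populated.
    if x.shape[-1] % BLOCK_SIZE != 0:
        raise ValueError(
            f"Innermost dim {x.shape[-1]} is not divisible by {BLOCK_SIZE}")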
Thank you for your review, @mgoin!
@mgoin thanks for your review. Can you please take another look?
LGTM, thanks @pavanimajety!
Add support for ModelOpt FP4 Checkpoint.
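As a quick usage sketch (the model ID below is a placeholder for any NVFP4 checkpoint exported by TensorRT Model Optimizer, and the quantization method name is assumed to be "modelopt_fp4"):

from vllm import LLM

# Placeholder checkpoint ID; substitute a real ModelOpt FP4 export.
llm = LLM(model="nvidia/Llama-3.1-8B-Instruct-FP4",
          quantization="modelopt_fp4")
print(llm.generate("Hello, FP4!")[0].outputs[0].text)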